Transcranial magnetic stimulation over sensorimotor cortex disrupts anticipatory reflex gain modulation for skilled action
Skilled interactions with new environments require flexible changes to the transformation from somatosensory signals to motor outputs. Transcortical reflex gains are known to be modulated according to task and environmental dynamics, but the mechanism of this modulation remains unclear. We examined reflex organization in the sensorimotor cortex. Subjects performed point-to-point arm movements into predictable force fields. When a small perturbation was applied just before the arm encountered the force field, reflex responses in the shoulder muscles changed according to the upcoming force field direction, indicating anticipatory reflex gain modulation. However, when a transcranial magnetic stimulation (TMS) pulse was applied before the reflex response to such perturbations, so that the silent period caused by TMS overlapped the reflex processing period, this modulation was abolished while the reflex itself remained. The loss of reflex gain modulation could not be explained by reduced reflex amplitudes or by peripheral effects of TMS on the muscles themselves. Instead, we suggest that TMS disrupted interneuronal networks in the sensorimotor cortex, which contribute to reflex gain modulation rather than reflex generation. These networks may normally provide the adaptability of rapid sensorimotor reflex responses by regulating reflex gains according to the current dynamical environment
Determinant of leg spring stiffness during maximal hopping
Understanding the stiffness of the lower extremities during human movement may provide important information for developing more effective training methods for sports activities. It has been reported that leg stiffness (Kleg) during submaximal hopping depends primarily on ankle stiffness (Farley & Morgenroth, 1999), but how stiffness is regulated during maximal hopping is unknown. The aim of the present study was to investigate the major determinant of leg stiffness during maximal hopping
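For context, leg stiffness in hopping studies of this kind is conventionally estimated from the spring-mass model, treating the stance leg as a linear spring. A minimal sketch of that standard calculation, in generic notation (the symbols below are common usage, not necessarily this study's):

```latex
% Spring-mass estimate of leg stiffness during hopping:
% F_{\max}: peak vertical ground reaction force during contact
% \Delta y: maximum downward displacement of the centre of mass during contact
K_{\mathrm{leg}} = \frac{F_{\max}}{\Delta y}
% Analogously, the torsional stiffness of a single joint (e.g. the ankle)
% is often estimated as the change in joint moment over the change in joint angle:
K_{\mathrm{joint}} = \frac{\Delta M}{\Delta \theta}
```

Comparing Kleg against joint stiffnesses estimated this way is how studies such as Farley & Morgenroth (1999) identify which joint dominates the leg spring.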
Experimental investigations of control principles of involuntary movement: a comprehensive review of the Kohnstamm phenomenon
The Kohnstamm phenomenon refers to the observation that if one pushes the arm hard outwards against a fixed surface for about 30 s, and then moves away from the surface and relaxes, an involuntary movement of the arm occurs, accompanied by a feeling of lightness. Central, peripheral and hybrid theories of the Kohnstamm phenomenon have been advanced. Afferent signals may be irrelevant if purely central theories hold. Alternatively, according to peripheral accounts, altered afferent signalling actually drives the involuntary movement. Hybrid theories suggest afferent signals control a centrally-programmed aftercontraction via negative position feedback control or positive force feedback control. The Kohnstamm phenomenon has provided an important scientific method for comparing voluntary with involuntary movement, both with respect to subjective experience, and for investigating whether involuntary movements can be brought under voluntary control. A full review of the literature reveals that a hybrid model best explains the Kohnstamm phenomenon. On this model, a central adaptation interacts with afferent signals at multiple levels of the motor hierarchy. The model assumes that a Kohnstamm generator sends output via the same pathways as voluntary movement, yet the resulting movement feels involuntary due to a lack of an efference copy to cancel against sensory inflow. This organisation suggests the Kohnstamm phenomenon could represent an amplification of neuromotor processes normally involved in automatic postural maintenance. Future work should determine which afferent signals contribute to the Kohnstamm phenomenon, the location of the Kohnstamm generator, and the principle of feedback control operating during the aftercontraction
Somatosensory evoked potentials that index lateral inhibition are modulated according to the mode of perceptual processing: comparing or combining multi-digit tactile motion
Many perceptual studies focus on the brain’s capacity to discriminate between stimuli. However, our normal experience of the world also involves integrating multiple stimuli into a single perceptual event. Neural mechanisms such as lateral inhibition are believed to enhance local differences between sensory inputs from nearby regions of the receptor surface. However, this mechanism would seem dysfunctional when sensory inputs need to be combined rather than contrasted. Here, we investigated whether the brain can strategically regulate the strength of suppressive interactions that underlie lateral inhibition between finger representations in human somatosensory processing. To do this, we compared sensory processing between conditions that required either comparing or combining information. We delivered two simultaneous tactile motion trajectories to the index and middle fingertips of the right hand. Participants had to either compare the directions of the two stimuli, or to combine them to form their average direction. To reveal preparatory tuning of somatosensory cortex, we used an established event-related potential design to measure the interaction between cortical representations evoked by digital nerve shocks immediately before each tactile stimulus. Consistent with previous studies, we found a clear suppression between cortical activations when participants were instructed to compare the tactile motion directions. Importantly, this suppression was significantly reduced when participants had to combine the same stimuli. These findings suggest that the brain can strategically switch between a comparative and a combinative mode of somatosensory processing, according to the perceptual goal, by preparatorily adjusting the strength of a process akin to lateral inhibition
Seeing motion of controlled object improves grip timing in adults with autism spectrum condition: evidence for use of inverse dynamics in motor control
Previous studies (Haswell et al. in Nat Neurosci 12:970–972, 2009; Marko et al. in Brain J Neurol 138:784–797, 2015) reported that people with autism rely less on vision when learning to reach in a force field. This suggested the possibility that they have difficulty extracting force information from visual motion signals, a process called inverse dynamics computation. Our recent study (Takamuku et al. in J Int Soc Autism Res 11:1062–1075, 2018) examined this inverse computation with two perceptual tasks and found similar performance in typical and autistic adults. However, this tested the computation only in the context of sensory perception, leaving open the possibility that the suspected deficit is specific to the motor domain. Here, to address this concern, we tested the use of inverse dynamics computation in the context of motor control by measuring changes in grip timing caused by seeing, or not seeing, a controlled object. The motion of the object was informative of its inertial force, and typical participants improved their grip timing based on the visual feedback. Our interest was in whether the autism participants would show the same improvement. While some autism participants showed atypical hand slowing when seeing the controlled object, we found no evidence of abnormalities in the inverse computation in our grip timing task or in a replication of the perceptual task. This suggests that the capacity for inverse dynamics computation is preserved not only for sensory perception but also for motor control in adults with autism
Sensorimotor signals underlying space perception: An investigation based on self-touch
Perception of space has puzzled scientists since antiquity, and is among the foundational questions of scientific psychology. Classical “local sign” theories assert that perception of spatial extent ultimately derives from efferent signals specifying the intensity of motor commands. Everyday cases of self-touch, such as stroking the left forearm with the right index fingertip, provide an important platform for studying spatial perception, because of the tight correlation between motor and tactile extents. If the motor and sensory information in self-touch were artificially decoupled, these classical theories would clearly predict that motor signals – especially if self-generated rather than passive – should influence spatial perceptual judgements, but not vice versa. We tested this hypothesis by quantifying the contribution of tactile, kinaesthetic, and motor information to judgements of spatial extent. In a self-touch paradigm involving two coupled robots in master-slave configuration, voluntary movements of the right hand produced simultaneous tactile stroking on the left forearm. Crucially, the coupling between robots was manipulated so that tactile stimulation could be shorter, equal, or longer in extent than the movement that caused it. Participants judged either the extent of the movement, or the extent of the tactile stroke. By controlling sensorimotor gains in this way, we quantified how motor signals influence tactile spatial perception, and vice versa. Perception of tactile extent was strongly biased by the amplitude of the movement performed. Importantly, touch also affected the perceived extent of movement. Finally, the effect of movement on touch was significantly stronger when movements were actively generated compared to when the participant's right hand was passively moved by the experimenter.
Overall, these results suggest that motor signals indeed dominate the construction of spatial percepts, at least when the normal tight correlation between motor and sensory signals is broken. Importantly, however, this dominance is not total, as classical theory might suggest
Neural dynamics of illusory tactile pulling sensations
Directional tactile pulling sensations are integral to everyday life, but their neural mechanisms remain unknown. Prior accounts hold that primary somatosensory (SI) activity is sufficient to generate pulling sensations, with alternative proposals suggesting that amodal frontal or parietal regions may be critical. We combined high-density EEG with asymmetric vibration, which creates an illusory pulling sensation, thereby unconfounding pulling sensations from unrelated sensorimotor processes. Oddballs that created pulls in the opposite direction to common stimuli were compared to the same oddballs after neutral common stimuli (symmetric vibration) and to neutral oddballs. We found evidence against the sensory-frontal N140, and in favor of the midline P200, tracking the emergence of pulling sensations, specifically contralateral parietal lobe activity at 264–320 ms, centered on the intraparietal sulcus. This suggests that SI is not sufficient to generate pulling sensations, which instead depend on the parietal association cortex, and may reflect the extraction of orientation information and related spatial processing
World model learning and inference
Understanding information processing in the brain, and creating general-purpose artificial intelligence, are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control in various interactions with the world, including the self, which are not defined in advance and vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world
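As background for the free energy principle mentioned above, the variational free energy can be written in its standard form (the notation below is the generic one, not necessarily the article's):

```latex
% Variational free energy for observations o and hidden states s,
% with approximate posterior (recognition density) q(s):
F = \mathbb{E}_{q(s)}\left[ \ln q(s) - \ln p(o, s) \right]
  = D_{\mathrm{KL}}\left[ q(s) \,\middle\|\, p(s \mid o) \right] - \ln p(o)
```

Because the KL divergence is non-negative, F upper-bounds surprise, −ln p(o); minimising F with respect to q(s) therefore approximates Bayesian inference over hidden states, which is what licenses reading perception and action as inference under a probabilistic world model.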
Interhemispheric communication during haptic self-perception
During the haptic exploration of a planar surface, slight resistances against the hand's movement are illusorily perceived as asperities (bumps) in the surface. If the surface being touched is one's own skin, an actual bump would also produce increased tactile pressure from the moving finger onto the skin. We investigated how kinaesthetic and tactile signals combine to produce haptic perceptions during self-touch. Participants performed two successive movements with the right hand. A haptic force-control robot applied resistances to both movements, and participants judged which movement was felt to contain the larger bump. An additional robot delivered simultaneous but task-irrelevant tactile stroking to the left forearm. These strokes contained either increased or decreased tactile pressure synchronized with the resistance-induced illusory bump encountered by the right hand. We found that the size of bumps perceived by the right hand was enhanced by an increase in left tactile pressure, but also by a decrease. Tactile event detection was thus transferred interhemispherically, but the sign of the tactile information was not respected. Randomizing (rather than blocking) the presentation order of left tactile stimuli abolished these interhemispheric enhancement effects. Thus, interhemispheric transfer during bimanual self-touch requires a stable model of temporally synchronized events, but does not require geometric consistency between hemispheric information, nor between tactile and kinaesthetic representations of a single common object
Collective Charge Excitation in a Dimer Mott Insulating System
Charge dynamics in a dimer Mott insulating system, where a non-polar
dimer-Mott (DM) phase and a polar charge-ordered (CO) phase compete with each
other, are studied. In particular, collective charge excitations are analyzed
in the three different models where the internal-degree of freedom in a dimer
is taken into account. Collective charge excitation exists both in the
non-polar DM phase and the polar CO phase, and softens in the phase boundary.
This mode is observable by the optical conductivity spectra where the light
polarization is parallel to the electric polarization in the polar CO phase.
Connections between the present theory and the recent experimental results in
kappa-(BEDT-TTF)2Cu2(CN)3 are discussed.Comment: 5 pages, 4 figure